Thursday, March 16, 2023

Unit-I Client/Server Computing

Class: MSc(SE)SY Unit I Sub: Client Server Technology 


DBMS concept and architecture, single system image, client/server architecture, mainframe-centric client/server computing, downsizing and client/server computing, preserving mainframe applications investment through porting, client/server development tools, advantages of client/server computing.



DBMS concept and architecture 

A database is a collection of related data, and data is a collection of facts and figures that can be processed to produce information.

Mostly, data represents recordable facts. Data aids in producing information, which is based on facts. For example, if we have data about the marks obtained by all students, we can then determine the toppers and the average marks.

A database management system stores data in such a way that it  becomes easier to retrieve, manipulate, and produce information. 

Characteristics 

Traditionally, data was organized in file formats. The DBMS was a new concept then, and much research was done to make it overcome the deficiencies of the traditional style of data management. A modern DBMS has the following characteristics −

Real-world entity − A modern DBMS is more realistic and uses real-world entities to design its architecture. It uses their behavior and attributes too. For example, a school database may use students as an entity and their age as an attribute.

Relation-based tables − DBMS allows entities and relations among them  to form tables. A user can understand the architecture of a database just  by looking at the table names. 

Isolation of data and application − A database system is entirely different from its data. A database is an active entity, whereas data is said to be passive, on which the database works and organizes. A DBMS also stores metadata, which is data about data, to ease its own processes.

Less redundancy − A DBMS follows the rules of normalization, which splits a relation when any of its attributes has redundant values. Normalization is a mathematically rich and scientific process that reduces data redundancy.
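As an illustrative sketch (not from the text), this splitting can be shown with Python's built-in sqlite3 module: a relation in which a student's name is repeated for every enrollment is divided into two tables so that each fact is stored only once. All table and column names here are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Unnormalized relation: the student's name repeats for every course.
cur.execute("CREATE TABLE enrollment_flat (student_id INT, name TEXT, course TEXT)")
cur.executemany(
    "INSERT INTO enrollment_flat VALUES (?, ?, ?)",
    [(1, "Asha", "DBMS"), (1, "Asha", "Networks"), (2, "Ravi", "DBMS")],
)

# Normalized: split into two relations so each name is stored once.
cur.execute("CREATE TABLE student (student_id INT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE enrollment (student_id INT, course TEXT)")
cur.execute("INSERT INTO student SELECT DISTINCT student_id, name FROM enrollment_flat")
cur.execute("INSERT INTO enrollment SELECT student_id, course FROM enrollment_flat")

names = [row[0] for row in cur.execute("SELECT name FROM student ORDER BY student_id")]
print(names)  # each name now appears exactly once
```

Updating a student's name now touches a single row in `student` instead of every enrollment row, which is exactly the redundancy reduction normalization aims for.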

Consistency − Consistency is a state in which every relation in a database remains consistent. There exist methods and techniques that can detect any attempt to leave the database in an inconsistent state. A DBMS can provide greater consistency than earlier forms of data-storing applications such as file-processing systems.

Security − Features such as multiple views offer security to some extent, since users are unable to access the data of other users and departments. A DBMS offers methods to impose constraints while entering data into the database and retrieving it at a later stage. A DBMS offers many different levels of security features, which enables multiple users to have different views with different capabilities.

Users 

A typical DBMS has users with different rights and permissions who use it for  different purposes. Some users retrieve data and some back it up. The users of a  DBMS can be broadly categorized as follows − 

Administrators − Administrators maintain the DBMS and are responsible for administering the database. They look after its usage and decide who should use it. They create access profiles for users and apply limitations to maintain isolation and enforce security. Administrators also look after DBMS resources such as the system license, required tools, and other software- and hardware-related maintenance.

Designers − Designers are the group of people who actually work on the  designing part of the database. They keep a close watch on what data  should be kept and in what format. They identify and design the whole set  of entities, relations, constraints, and views. 

End Users − End users are those who actually reap the benefits of having  a DBMS. End users can range from simple viewers who pay attention to  the logs or market rates to sophisticated users such as business analysts.


DBMS Architecture: 

The design of a DBMS depends on its architecture, which can be centralized, decentralized, or hierarchical. The architecture of a DBMS can be seen as either single-tier or multi-tier. An n-tier architecture divides the whole system into n related but independent modules, which can be independently modified, altered, or replaced.

In 1-tier architecture, the DBMS is the only entity, and the user sits directly on the DBMS and uses it. Any changes done here are applied directly to the DBMS itself. It does not provide handy tools for end users. Database designers and programmers normally prefer to use single-tier architecture.

If the architecture of a DBMS is 2-tier, then it must have an application through which the DBMS can be accessed. Programmers use 2-tier architecture when they access the DBMS by means of an application. Here the application tier is entirely independent of the database in terms of operation, design, and programming.

3-tier Architecture 

A 3-tier architecture separates its tiers from each other based on the complexity  of the users and how they use the data present in the database. It is the most  widely used architecture to design a DBMS.


Database (Data) Tier − At this tier, the database resides along with its  query processing languages. We also have the relations that define the  data and their constraints at this level. 

Application (Middle) Tier − At this tier reside the application server and  the programs that access the database. For a user, this application tier  presents an abstracted view of the database. End-users are unaware of any  existence of the database beyond the application. At the other end, the  database tier is not aware of any other user beyond the application tier.  Hence, the application layer sits in the middle and acts as a mediator  between the end-user and the database. 

User (Presentation) Tier − End-users operate on this tier and they know  nothing about any existence of the database beyond this layer. At this  layer, multiple views of the database can be provided by the application.  All views are generated by applications that reside in the application tier. 

Multiple-tier database architecture is highly modifiable, as almost all its  components are independent and can be changed independently. 
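The three tiers described above can be sketched as three separate Python layers. The database schema, the service function, and the rendering function below are all hypothetical illustrations, not taken from the text; the point is only that each tier talks to its neighbor and nothing else.

```python
import sqlite3

# --- Data tier: the database, its relations, and their constraints ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE marks (student TEXT, score INT)")
db.executemany("INSERT INTO marks VALUES (?, ?)", [("Asha", 91), ("Ravi", 78)])

# --- Application (middle) tier: mediates between user and database ---
def top_student():
    """Presents an abstracted view; callers never see SQL or the schema."""
    row = db.execute(
        "SELECT student, score FROM marks ORDER BY score DESC"
    ).fetchone()
    return {"name": row[0], "score": row[1]}

# --- Presentation tier: one of possibly many views of the same data ---
def render():
    result = top_student()
    return f"Topper: {result['name']} ({result['score']})"

print(render())
```

Because the presentation tier depends only on `top_student()`, the data tier could be swapped (say, for a networked server) without touching the view — the modifiability the paragraph above describes.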

Single System Image (SSI):

A single system image (SSI) is a distributed computing method in which the  system hides the distributed nature of the available resources from the users.  The computer cluster, therefore, appears to be a single computer to users. This  property can be enabled through software mechanisms or extended hardware. 

An SSI presents users with a globalized view of all the resources in the cluster,  irrespective of the nodes to which they are physically connected, while hiding  the fact that they are associated with different nodes. SSI also ensures  multiprocessing and an even load balancing among the systems. The machines  focus on system availability, scalable performance and transparency of resource  management. 

The features of a single system image include: 

Single User Interface: Users interact with the cluster through a single  GUI. 

Single Process Space: Every user process holds a unique cluster-wide  process ID. A process on a node creates a child process on the same or a  completely different node. Communication between processes residing on  different nodes is also possible.


Single Entry Point: Users connect to multiple nodes in the cluster through  a virtual host, which acts as single entry point. The connection request  moves to different hosts to balance the entire load. 

Single I/O Space: This permits all nodes to perform I/O operations on  local or remote disk devices. 

The benefits of using a single system image include: 

It provides a syntax similar to that used in other systems, reducing operating errors.

Users can work in their preferred interface, which is then altered by the  administrator to manage the entire cluster as a single entity. 

It reduces the cost of ownership and simplifies system management.

It provides a straightforward view of all activities in the entire cluster from a single node.

The end user is not concerned about where the application runs.

It avoids the need for numerous skilled administrators, because only one is needed to centralize system management.

It promotes standard tool development. 

It provides location-independent message communication.

Client/Server Architecture:

Client-server architecture is also called the "client/server network" or "network computing model," because in this architecture all services and requests are spread over the network. It functions like a distributed computing system, in which all components perform their tasks independently of each other.

Client-server architecture is a shared computer network architecture in which several clients (remote systems) send requests to, and obtain services from, a centralized server machine (host system). The client machine provides a user-friendly interface that helps users request services from the server computer and displays the output on the client system.
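As a minimal, hypothetical sketch of this request/response pattern, the following Python code runs a tiny echo "server" in a background thread and has a "client" send it one request over a local socket. The port, message format, and echo behavior are invented purely for illustration.

```python
import socket
import threading

def serve_once(sock):
    """Host system: accept one connection, read the request, reply."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("echo: " + request).encode())  # the "service"

# Server side: listen on any free local port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: send a request over the network and show the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"get balance")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply)
```

Even in this toy form, the roles match the text: the client owns the user-facing side and the server owns the service, with the network in between.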


Diagram of Client Server Architecture 

Advantages of client server architecture: 

Improved Data Sharing: Data that is retained by the usual business processes and maintained on a server is available to designated users (clients) through authorized access. The use of Structured Query Language (SQL) supports open access from all client platforms, and transparent network services ensure that the same data is being shared among users.

Integration of Services: Every client is given the opportunity to access corporate information via the desktop interface, eliminating the necessity to log into a terminal mode or another processor. Desktop tools such as spreadsheets and presentation packages can be used to work with corporate data, with the help of database and application servers resident on the network, to produce meaningful information.

Shared Resources amongst Different Platforms: Applications for the client/server model are built regardless of the hardware platform or the technical background of the underlying software (operating system), providing an open computing environment and enabling users to obtain the services of clients and servers (database, application, and communication servers).

Inter-Operation of Data: All development tools used for client/server applications access the back-end database server through SQL, an industry-standard data definition and access language that supports consistent management of corporate data. Advanced database products enable a user or application to gain a merged view of corporate data dispersed over several platforms. Rather than relying on a single target platform, this ensures database integrity with the ability to perform updates in multiple locations, enforcing quality performance and recovery.

Data Processing Capability Despite the Location: We are in an era that is undergoing a transformation from machine-centered systems to user-centered systems. In machine-centered systems such as mainframe and mini/micro applications, the access platforms, function keys, navigation options, performance, and security were all unique and exposed to the user. Through client/server, users can log directly into a system regardless of the location or technology of the processors.

Easy Maintenance: Since client/server architecture is a distributed model, with responsibilities dispersed among independent computers integrated across a network, it has an advantage in terms of maintenance. It is easy to replace, repair, upgrade, and relocate a server while clients remain unaffected. This unawareness of change is called encapsulation.

Disadvantages of Client-Server Architecture: 

Overloaded Servers: 

When there are frequent simultaneous client requests, the server can become severely overloaded, causing traffic congestion.

Impact of Centralized Architecture: 

Since the architecture is centralized, if a critical server fails, client requests cannot be fulfilled. Therefore, client/server lacks the robustness of a good network.

Mainframe-Centric Client/Server Computing  

The mainframe-centric model uses the presentation capabilities of the  workstation to front-end existing applications. The character mode interface is  remapped by products such as Easel and Mozart. The same data is displayed or  entered through the use of pull-down lists, scrollable fields, check boxes, and  buttons; the user interface is easy to use, and information is presented more  clearly. In this mainframe-centric model, mainframe applications continue to  run unmodified, because the existing terminal data stream is processed by the  workstation-based communications API. 


The availability of products such as UniKix and IBM's CICS OS/2 and 6000  can enable the entire mainframe processing application to be moved unmodified  to the workstation. This protects the investment in existing applications while  improving performance and reducing costs.  

Character mode applications, usually driven from a block mode screen, attempt  to display as much data as possible in order to reduce the number of  transmissions required to complete a function. Dumb terminals impose  limitations on the user interface including fixed length fields, fixed length lists,  crowded screens, single or limited character fonts, limited or no graphics icons,  and limited windowing for multiple application display. In addition, the fixed  layout of the screen makes it difficult to support the display of conditionally  derived information.  

In contrast, the workstation GUI provides facilities to build the screen  dynamically. This enables screens to be built with a variable format based  conditionally on the data values of specific fields. Variable length fields can be  scrollable, and lists of fields can have a scrollable number of rows. This enables  a much larger virtual screen to be used with no additional data communicated  between the client workstation and server.  

Windowing can be used to pull up additional information such as help text,  valid value lists, and error messages without losing the original screen contents.  

The more robust GUI facilities of the workstation enable the user to navigate  easily around the screen.  

Additional information can be encapsulated by varying the display's colors, fonts, graphics icons, scrollable lists, pull-down lists, and option boxes. Option lists can be provided to enable users to quickly select input values. Help can be provided, based on the context and the cursor location, using the same pull-down list facilities.

Although it is a limited use of client/server computing capability, a GUI front  end to an existing application is frequently the first client/server-like application  implemented by organizations familiar with the host mainframe and dumb terminal approach. The GUI preserves the existing investment while providing  the benefits of ease of use associated with a GUI. It is possible to provide  dramatic and functionally rich changes to the user interface without host  application change.  

The next logical step is the provision of some edit and processing logic executing at the desktop workstation. This additional logic can be added without requiring changes in the host application and may reduce the host transaction rate by sending up only valid transactions. With minimal changes to the host application, network traffic can be reduced and performance can be improved by using the workstation's processing power to encode the datastream into a compressed form.

A more interactive user interface can be provided, with built-in, context-sensitive help, extensive prompting, and user interfaces that are sensitive to the users' level of expertise. These options can be added through the use of workstation processing power. These capabilities enable users to operate an existing system with less intensive training and may even provide the opportunity for public access to the applications.
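This desktop-side edit logic can be sketched as follows. The field names, validation rules, and host stub are all hypothetical, chosen only to show invalid transactions being rejected at the workstation before they generate any host traffic.

```python
def validate_transaction(txn):
    """Edit logic running at the desktop; rules are illustrative."""
    errors = []
    if not txn.get("account", "").isdigit():
        errors.append("account must be numeric")
    if not 0 < txn.get("amount", 0) <= 10_000:
        errors.append("amount out of range")
    return errors

def submit(txn, send_to_host):
    """Forward the transaction to the host only if it passes local edits."""
    errors = validate_transaction(txn)
    if errors:
        return {"sent": False, "errors": errors}  # rejected at the desktop
    return {"sent": True, "reply": send_to_host(txn)}

# Stand-in for the host application: record each transaction it receives.
host_calls = []
def fake_host(txn):
    host_calls.append(txn)
    return "OK"

submit({"account": "12a", "amount": 50}, fake_host)  # invalid: stays local
submit({"account": "123", "amount": 50}, fake_host)  # valid: reaches host
print(len(host_calls))  # only the valid transaction generated host traffic
```

The host application is untouched; only the desktop gained logic, which is the pattern the paragraph above describes.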

Electronic data interchange (EDI) is an example of this front-end processing.  EDI enables organizations to communicate electronically with their suppliers or  customers. Frequently, these systems provide the workstation front end to deal  with the EDI link but continue to work with the existing back-end host system  applications. Messages are reformatted and responses are handled by the EDI  client, but application processing is done by the existing application server.  Productivity may be enhanced significantly by capturing information at the source and making it available to all authorized users. Typically, if users  employ a multipart form for data capture, the form data is entered into multiple  systems. Capturing this information once to a server in a client/server  application, and reusing the data for several client applications can reduce  errors, lower data entry costs, and speed up the availability of this information.  

Figure 1.9 illustrates how multiple applications can be integrated in this way.  The data is available to authorized users as soon as it is captured. There is no  delay while the forms are passed around the organization. This is usually a  better technique than forms imaging technology in which the forms are created  and distributed internally in an organization. The use of workflow-management  technology and techniques, in conjunction with imaging technology, is an  effective way of handling this process when forms are filled out by a person  who is physically remote from the organization.  

Intelligent Character Recognition (ICR) technology can be an extremely  effective way to automate the capture of data from a form, without the need to  key. Current experience with this technique shows accuracy rates greater than  99.5 percent for typed forms and greater than 98.5 percent for handwritten  forms.  

Figure : Desktop application integration.


Downsizing and Client/Server Computing  

Rightsizing and downsizing are strategies used with the client/server model to  take advantage of the lower cost of workstation technology. Rightsizing and  upsizing may involve the addition of more diverse or more powerful computing  resources to an enterprise computing environment. The benefits of rightsizing  are reduction in cost and/or increased functionality, performance, and flexibility  in the applications of the enterprise. Significant cost savings usually are  obtained from a resulting reduction in employee, hardware, software, and  maintenance expenses. Additional savings typically accrue from the improved  effectiveness of the user community using client/server technology.  

Downsizing is frequently implemented in concert with a flattening of the  organizational hierarchy.  

Eliminating middle layers of management implies empowerment to the first  level of management with the decision-making authority for the whole job.  Information provided at the desktop by networked PCs and workstations  integrated with existing host (such as mainframe and minicomputer)  applications is necessary to facilitate this empowerment. These desktop-host  integrated systems house the information required to make decisions quickly.  To be effective, the desktop workstation must provide access to this information  as part of the normal business practice. Architects and developers must work  closely with business decision makers to ensure that new applications and  systems are designed to be integrated with effective business processes. Much  of the cause of poor return on technology investment is attributable to a lack of  understanding by the designers of the day-to-day business impact of their  solutions.  

Downsizing information systems is more than an attempt to use cheaper workstation technologies to replace existing mainframes and minicomputers in use. Although some benefit is obtained by this approach, greater benefit is obtained by reengineering the business processes to really use the capabilities of the desktop environment. Systems solutions are effective only when they are seen by the actual user to add value to the business process.

Client/server technology implemented on low-cost standard hardware will drive  downsizing. Client/server computing makes the desktop the users' enterprise. As  we move from the machine-centered era of computing into the workgroup era,  the desktop workstation is empowering the business user to regain ownership of  his or her information resource. Client/server computing combines the best of  the old with the new—the reliable multiuser access to shared data and resources  with the intuitive, powerful desktop workstation.  

Object-oriented development concepts are embodied in the use of a systems development environment (SDE) created for an organization from an architecturally selected set of tools. The SDE provides more effective development and maintenance than companies have experienced with traditional host-based approaches.

Client/server computing is open computing. Mix and match is the rule.  Development tools and development environments must be created with both  openness and standards in mind.  

Mainframe applications rarely can be downsized—without modifications—to a  workstation environment. Modifications can be minor, wherein tools are used to  port (or rehost) existing mainframe source code—or major, wherein the  applications are rewritten using completely new tools. In porting, native  COBOL compilers, functional file systems, and emulators for DB2, IMS  DB/DC, and CICS are available for workstations. In rewriting, there is a broad  array of tools ranging from PowerBuilder, Visual Basic, and Access, to larger  scale tools such as Forte and Dynasty.  

Preserving Your Mainframe Applications Investment Through Porting  

Although the percentage of client/server applications development is rapidly moving away from a mainframe-centric model, it is possible to downsize and still preserve a large amount of the investment in application code. For example, the Micro Focus COBOL/2 Workbench by Micro Focus Company Inc. bundles products from Innovative Solutions Inc., Stingray Software Company Inc., and XDB Systems Inc. to provide the capability to develop systems on a PC LAN for production execution on an IBM mainframe. These products, in conjunction with the ProxMVS product from Proximity Software, enable extensive unit and integration testing to be done on a PC LAN before moving the system to the mainframe for final system and performance testing. Used within a properly structured development environment, these products can dramatically reduce mainframe development costs.

Micro Focus COBOL/2 supports GUI development targeted for implementation  with OS/2 Presentation Manager and Microsoft Windows 3.x. Another Micro  Focus product, the Dialog System, provides support for GUI and character  mode applications that are independent of the underlying COBOL applications.  

Micro Focus has added an Object Oriented (OO) option to its workbench to  facilitate the creation of reusable components. The OO option supports  integration with applications developed under Smalltalk/V PM.  

IBM's CICS for OS/2, OS400, RS6000, and HP/UX products enable developers  to directly port applications using standard CICS call interfaces from the  mainframe to the workstation. These applications can then run under OS/2,  AIX, OS400, HP/UX, or MVS/VSE without modification. This promises to  enable developers to create applications for execution in the CICS MVS  environment and later to port them to these other environments without  modification. Conversely, applications can be designed and built for such  environments and subsequently ported to MVS (if this is a logical move).  Organizations envisioning such a migration should ensure that their SDE  incorporates standards that are consistent for all of these platforms.  

To help ensure success in using these products, the use of a COBOL code  generator, such as Computer Associates' (previously Pansophic) Telon PWS,  provides the additional advantages of a higher level of syntax for systems  development. Telon provides particularly powerful facilities that support the  object-oriented development concepts necessary to create a structured  development environment and to support code and function reuse. The  generated COBOL is input to the Micro Focus Workbench toolkit to support  prototyping and rapid application development. Telon applications can be  generated to execute in the OS/2, UNIX AIX, OS400, IMS DB/DC, CICS DLI,  DB2, IDMS, and Datacom DB environments. This combination—used in  conjunction with a structured development environment that includes  appropriate standards—provides the capability to build single-system image  applications today. In an environment that requires preservation of existing  host-based applications, this product suite is among the most complete for client/server computing. 


These products, combined with the cheap processing power available on the  workstation, make the workstation LAN an ideal development and maintenance  environment for existing host processors. When an organization views  mainframe or minicomputer resources as real dollars, developers can usually  justify offloading the development in only three to six months. Developers can  be effective only when a proper systems development environment is put in  place and provided with a suite of tools offering the host capabilities plus  enhanced connectivity. Workstation operating systems are still more primitive  than the existing host server MVS, VMS, or UNIX operating systems.  Therefore, appropriate standards and procedures must be put in place to coordinate shared development. The workstation environment will change. Only  projects built with common standards and procedures will be resilient enough to  remain viable in the new environment.  

The largest savings come from new projects that can establish appropriate  standards at the start and do all development using the workstation LAN  environment. It is possible to retrofit standards to an existing environment and  establish a workstation with a LAN-based maintenance environment. The  benefits are less because retrofitting the standards creates some costs. However,  these costs are justified when the application is scheduled to undergo significant  maintenance or if the application is very critical and there is a desire to reduce  the error rate created by changes. The discipline associated with the movement  toward client/server-based development, and the transfer of code between the  host and client/server will almost certainly result in better testing and fewer  errors. The testing facilities and usability of the workstation will make the  developer and tester more effective and therefore more accurate.  

Business processes use database, communications, and application services. In  an ideal world, we pick the best servers available to provide these services,  thereby enabling our organizations to enjoy the maximum benefit that current  technology provides. Real-world developers make compromises around the  existing technology, existing application products, training investments, product  support, and a myriad other factors.  

Key to the success of full client/server applications is selecting an appropriate application and technical architecture for the organization. Once the technical architecture is defined, the tools are known. The final step is to implement an SDE to define the standards needed to use the tools effectively. This SDE is the collection of hardware, software, standards, standard procedures, interfaces, and training built up to support the organization's particular needs.

The Real World of Client/Server Development Tools 


Many construction projects fail because their developers assume that a person with a toolbox full of carpenter's tools is a capable builder. To be a successful builder, a person must be trained to build according to standards. The creation of standards to define interfaces to the sewer, water, electrical utilities, road, school, and community systems is essential for successful, cost-effective building. We do not expect a carpenter to design such interfaces individually for every building. Rather, pragmatism discourages imagination in this regard. By reusing the models previously built to accomplish integration, we all benefit from cost and risk reduction.

Computer systems development using an SDE takes advantage of these same  concepts: Let's build on what we've learned. Let's reuse as much as possible to  save development costs, reduce risk, and provide the users with a common  "look and feel."  

Selecting a good set of tools affords an opportunity to be successful. Without  the implementation of a comprehensive SDE, developers will not achieve such  success.  

The introduction of a whole new generation of Object Technology based tools  for client/server development demands that proper standards be put in place to  support shared development, reusable code, interfaces to existing systems,  security, error handling, and an organizational standard "look and feel." As with  any new technology, there will be changes. Developers can build application  systems closely tied to today's technology or use an SDE and develop  applications that can evolve along with the technology platform.  

The Advantages of Client/Server Computing  

The client/server computing model provides the means to integrate personal  productivity applications for an individual employee or manager with specific  business data processing needs to satisfy total information processing  requirements for the entire enterprise.  

Enhanced Data Sharing  

Data that is collected as part of the normal business process and maintained on a  server is immediately available to all authorized users. The use of Structured  Query Language (SQL) to define and manipulate the data provides support for  open access from all client processors and software. SQL grants all authorized  users access to the information through a view that is consistent with their  business need. Transparent network services ensure that the same data is  available with the same currency to all designated users. 
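As a hedged illustration of SQL views granting access consistent with a business need (using invented table and column names), sqlite3 can show a view that exposes regional totals while hiding individual reps' figures:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, rep TEXT, amount INT)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "Asha", 500), ("East", "Ravi", 300), ("West", "Meena", 700)],
)

# A view tailored to one business need: managers see regional totals,
# but individual rep figures are not part of this view.
con.execute("""CREATE VIEW region_totals AS
               SELECT region, SUM(amount) AS total
               FROM sales GROUP BY region""")

rows = con.execute(
    "SELECT region, total FROM region_totals ORDER BY region"
).fetchall()
print(rows)
```

Every user querying `region_totals` sees totals computed from the same current server data, which is the "same currency to all designated users" property described above.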


Integrated Services  

In the client/server model, all information that the client (user) is entitled to use  is available at the desktop. There is no need to change into terminal mode or log  into another processor to access information. All authorized information and  processes are directly available from the desktop interface. The desktop tools— e-mail, spreadsheet, presentation graphics, and word processing—are available  and can be used to deal with information provided by application and database  servers resident on the network. Desktop users can use their desktop tools in  conjunction with information made available from the corporate systems to  produce new and useful information.  

Figure 2.1 shows a typical example of this integration. A word-processed  document that includes input from a drawing package, a spreadsheet, and a  custom-developed application can be created. The facilities of Microsoft's  Dynamic Data Exchange (DDE) enable graphics and spreadsheet data to be cut  and pasted into the word-processed document along with the window of  information extracted from a corporate database. The result is displayed by the  custom application.  

Creation of the customized document is done using only desktop tools and the  mouse to select and drag information from either source into the document. The  electronic scissors and glue provide powerful extensions to existing applications  and take advantage of the capability of the existing desktop processor. The  entire new development can be done by individuals who are familiar only with  personal productivity desktop tools. Manipulating the spreadsheet object, the  graphics object, the application screen object, and the document object using the  desktop cut and paste tools provides a powerful new tool to the end user.  

Developers use these same object manipulation capabilities under program  control to create new applications in a fraction of the time consumed by  traditional programming methods. Object-oriented development techniques are  dramatically increasing the power available to nonprogrammers and user  professionals to build and enhance applications.  

Another excellent and easily visualized example of the integration possible in the client/server model is implemented in the retail automobile service station. Figure 2.2 illustrates the comprehensive business functionality required in a retail gas service station. The service station automation (SSA) project integrates the services of gasoline flow measurement, gas pump billing, credit card validation, cash register management, point-of-sale, inventory control, attendance recording, electronic price signs, tank monitors, accounting, marketing, truck dispatch, and a myriad of other business functions. These business functions are all provided within the computer-hostile environment of the familiar service station, with the same type of workstations used to create this book. The system uses all of the familiar client/server components, including local and wide-area network services. Most of the system users are transitory employees with minimal training in computer technology. An additional challenge is the need for real-time processing of the flow of gasoline as it moves through the pump. If the processor does not detect and measure the flow of gasoline, the customer is not billed. The service station automation system is a classic example of the capabilities of an integrated client/server application implemented and working today.

Figure 2.2. Integrated retail outlet system architecture. 

Sharing Resources Among Diverse Platforms  

The client/server computing model provides opportunities to achieve true open  system computing. Applications may be created and implemented without  regard to the hardware platforms or the technical characteristics of the software.  Thus, users may obtain client services and transparent access to the services  provided by database, communications, and applications servers. Operating  systems software and platform hardware are independent of the application and  masked by the development tools used to build the application.  

In this approach, business applications are developed to deal with business processes invoked by the existence of a user-created "event." An event such as the push of a button, selection of a list element, entry in a dialog box, scan of a bar code, or flow of gasoline occurs without the application logic being sensitive to the physical platforms.

Client/server applications operate in one of two ways. They can function as the  front end to an existing application—the more limited mainframe-centric model  discussed in Chapter 1—or they can provide data entry, storage, and reporting  by using a distributed set of clients and servers. In either case, the use—or even  the existence—of a mainframe host is totally masked from the workstation  developer by the use of standard interfaces such as SQL.  

Data Interchangeability and Interoperability  

SQL is an industry-standard data definition and access language. This standard  definition has enabled many vendors to develop production-class database  engines to manage data as SQL tables. Almost all the development tools used  for client/server development expect to reference a back-end database server  accessed through SQL. Network services provide transparent connectivity  between the client and local or remote servers. With some database products,  such as Ingres Star, a user or application can define a consolidated view of data  that is actually distributed between heterogeneous, multiple platforms.  

Systems developers are finally reaching the point at which this heterogeneity  will be a feature of all production-class database engine products. Most systems  that have been implemented to date use a single target platform for data  maintenance. The ability to do high-volume updates at multiple locations and  maintain database integrity across all types of errors is just becoming available  with production-level quality performance and recovery. Systems developed  today that use SQL are inherently transparent to data storage location and the  technology of the data storage platform. The SQL syntax does not specify a  location or platform. This transparency enables tables to be moved to other  platforms and locations without affecting the application code. This feature is  especially valuable when adopting proven, new technology or if it makes  business sense to move data closer to its owner.  
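
The claim that SQL syntax specifies neither location nor platform can be illustrated with a small sketch: the application function below issues the same SQL whatever connection object it is handed. The table name and the in-memory SQLite back end are assumptions for demonstration; any DB-API-style connection to a remote engine could be substituted without changing the application code.

```python
import sqlite3

def total_amount(conn):
    """Application code issues the same SQL regardless of where the
    table physically lives; only the connection object differs."""
    cur = conn.execute("SELECT SUM(amount) FROM orders")
    return cur.fetchone()[0]

# Here the "server" is an in-memory SQLite database, but a connection
# to a remote engine could be passed in with the code above unchanged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.0)])
print(total_amount(conn))  # 42.0
```

Because nothing in `total_amount` names a host, a file, or a storage engine, moving the `orders` table to another platform only changes how the connection is opened.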

Database services can be provided in response to an SQL request, without regard to the underlying engine. This engine can be provided by vendors such as ASK/Ingres, Oracle, Sybase, or IBM running on a Windows NT, OS/2, UNIX, or MVS platform. The system development environment (SDE) and tools must implement the interfaces to the vendor database and operating system products. The developer does not need to know which engine or operating system is running. If the SDE does not insulate the developer from direct access to the database server platform, the temptation to be efficient will lead developers to use "features" available only from a specific vendor. The transparency of platform is essential if the application is to remain portable. Application portability is essential for taking advantage of innovation in technology and cost competitiveness, and for providing protection from the danger of vendor failure.

Database products, such as Sybase used with the Database Gateway product from Micro DecisionWare, provide direct, production-quality, and transparent connectivity between the client and servers. These products may be implemented using DB2, IMS/DB, or VSAM through CICS into DB2, and Sybase running under VMS, Windows NT, OS/2, DOS, and MacOS. Bob Epstein, executive vice president of Sybase, Inc., views Sybase's open server approach to distributed data as incorporating characteristics of the semantic heterogeneity solution. In this solution, the code at the remote server can be used to deal with different database management systems (DBMSs), data models, or processes. The remote procedure call (RPC) mechanism used by Sybase can be interpreted as a message that invokes the appropriate method or procedure on the open server. True, somebody has to write the code that masks the differences. However, certain parts—such as accessing a foreign DBMS (like Sybase SQL Server to IBM DB2)—can be standardized.
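
The open server pattern described here, where an RPC message invokes a procedure on a remote server that masks back-end differences, can be sketched with Python's standard-library XML-RPC as a stand-in for Sybase's proprietary RPC mechanism. The procedure name, port handling, and in-memory "data store" are all illustrative assumptions.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A stand-in "open server": the registered procedure could internally
# translate the request for a foreign DBMS; names here are illustrative.
def get_balance(account):
    data = {"A1": 500, "A2": 750}   # fake back-end data store
    return data[account]

# Bind to an ephemeral port so the sketch runs anywhere.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(get_balance)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client simply invokes a remote procedure; the mechanics of reaching
# the server (and whatever DBMS sits behind it) are masked by the RPC layer.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.get_balance("A1")
print(result)  # 500
server.shutdown()
```

The client never learns whether `get_balance` consulted DB2, IMS/DB, or a dictionary; standardizing that boundary is exactly the point of the open server approach.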

ASK's Ingres Star product provides dynamic SQL to support a distributed database between UNIX and MVS. Thus, Ingres Windows 4GL running under DOS or UNIX as a client can request a data view that involves data on the UNIX Ingres and MVS DB2 platforms. Ingres is committed to providing static SQL and IMS support in the near future. Ingres' Intelligent Database engine will optimize the query so that SQL requests to distributed databases are handled in a manner that minimizes the number of rows moved from the remote server. This optimization is particularly crucial when dynamic requests are made to distributed databases. With the announcement of the Distributed Relational Database Architecture (DRDA), IBM has recognized the need for open access from other products to DB2. This product provides the application program interfaces (APIs) necessary for other vendors to generate static SQL requests to the DB2 engine running under MVS. Norris van den Berg, manager of Strategy for Programming Systems at IBM's Santa Teresa Laboratory in San Jose, California, points out that IBM's Systems Application Architecture (SAA) DBMSs are different. Even within IBM, they must deal with the issues of data interchange and interoperability in a heterogeneous environment. More importantly, IBM is encouraging third-party DBMS vendors to comply with its DRDA, a set of specifications that will enable all DBMSs to interoperate.

The client/server model provides the capability to make ad hoc requests for information. As a result, optimization of dynamic SQL and support for distributed databases are crucial for the success of the second generation of a client/server application. The first generation implements the operational aspects of the business process. The second generation is the introduction of ad hoc requests generated by the knowledgeable user looking to gain additional insight from the information available.

Masked Physical Data Access  

When SQL is used for data access, users can access information from databases anywhere in the network. Whether from the local PC, the local server, or a wide area network (WAN) server, data access is supported with the developer and user using the same data request. The only noticeable difference may be performance degradation if the network bandwidth is inadequate. Data may be accessed from dynamic random-access memory (D-RAM), from magnetic disk, or from optical disk, with the same SQL statements. Logical tables can be accessed by selecting a subset of the columns in a table, without any knowledge of the ordering of columns or awareness of extraneous columns. Several tables may be joined into a view that creates a new logical table for application program manipulation, without regard to its physical storage format.
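
Joining several tables into a view that acts as a new logical table can be sketched as follows, again using Python's sqlite3 as the database; the table names, columns, and sample rows are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (cid INTEGER, name TEXT)")
conn.execute("CREATE TABLE invoices (cid INTEGER, total REAL)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO invoices VALUES (1, 99.5)")

# Joining two tables into a view yields a new logical table; the program
# manipulates it with no knowledge of column order or physical storage.
conn.execute("""CREATE VIEW customer_totals AS
                SELECT c.name, i.total
                FROM customers c JOIN invoices i ON c.cid = i.cid""")
row = conn.execute("SELECT name, total FROM customer_totals").fetchone()
print(row)  # ('Acme', 99.5)
```

The application selects named columns from the view, so reordering columns in, or adding columns to, the underlying tables leaves the query untouched.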

The use of new data types, such as binary large objects (BLOBs), enables other  types of information such as images, video, and audio to be stored and accessed  using the same SQL statements for data access. RPCs frequently include data  conversion facilities to translate the stored data of one processor into an  acceptable format for another.  
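
That BLOB columns are handled with the same SQL statements as any other data can be shown in a few lines; the table name and the byte payload (standing in for an image or audio clip) are assumptions for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media (id INTEGER, clip BLOB)")

payload = bytes(range(16))   # stands in for image, video, or audio data
# The same INSERT/SELECT syntax handles a binary large object as any column.
conn.execute("INSERT INTO media VALUES (?, ?)", (1, payload))
stored = conn.execute("SELECT clip FROM media WHERE id = 1").fetchone()[0]
print(stored == payload)  # True
```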

Location Independence of Data and Processing  

We are moving from the machine-centered computing era of the 1970s and 1980s to a new era in which PC-familiar users demand systems that are user-centered. Previously, a user logged into a mainframe, mini-, or microcomputer application. The syntax of access was unique to each platform. Function keys, error messages, navigation methods, security, performance, and editing were all very visible. Today's users expect a standard "look and feel." Users log into an application from the desktop with no concern for the location or technology of the processors involved.

Figure 2.3 illustrates the evolution of a user's view of the computing platform. In the 1970s, users logged into the IBM mainframe, the VAX minicomputer, or one of the early microcomputer applications. It was evident which platform was being used. Each platform required a unique login sequence, security parameters, keyboard options, and custom help, navigation, and error recovery. In the current user-centered world, the desktop provides the point of access to workgroup and enterprise services without regard to the platform of application execution. Standard services such as login, security, navigation, help, and error recovery are provided consistently across all applications.

Figure 2.3. The computing transformation. 

Developers today are provided with considerable independence. Data is accessed through SQL without regard to the hardware, operating system, or location providing the data. Consistent network access methods envelop the application and SQL requests within an RPC. The network may be based on Open Systems Interconnection (OSI), Transmission Control Protocol/Internet Protocol (TCP/IP), or Systems Network Architecture (SNA), but no changes are required in the business logic coding. The developer of business logic deals with a standard process logic syntax without considering the physical platform. Development languages such as COBOL, C, and Natural, and development tools such as Telon, Ingres 4GL, PowerBuilder, and CSP, as well as some evolving CASE tools such as Bachman, Oracle CASE, and Texas Instruments' IEF, all execute on multiple platforms and generate applications for execution on multiple platforms.

The application developer deals with the development language and uses a  version of SDE customized for the organization to provide standard services.  The specific platform characteristics are transparent and subject to change  without affecting the application syntax. 


Centralized Management  

As processing moves away from the central data center to the remote office and plant, workstation, server, and local area network (LAN) reliability must approach that provided today by the centrally located mini- and mainframe computers. The most effective way to ensure this is to provide monitoring and support from these same central locations. A combination of technologies that can "see" the operation of hardware and software on the LAN, monitored by experienced support personnel, provides the best opportunity to achieve the level of reliability required.

The first step in effectively providing remote LAN management is to establish  standards for hardware, software, networking, installation, development, and  naming. These standards, used in concert with products such as IBM's  Systemview, Hewlett-Packard's Openview, Elegant's ESRA, Digital's EMA, and  AT&T's UNMA products, provide the remote view of the LAN. Other tools,  such as PC Connect for remote connect, PCAssure from Centel for security,  products for hardware and software inventory, and local monitoring tools such  as Network General's Sniffer, are necessary for completing the management  process.


